
    High Resolution Surface Reconstruction of Cultural Heritage Objects Using Shape from Polarization Method

    Nowadays, three-dimensional reconstruction is used in various fields such as computer vision, computer graphics, mixed reality, and digital twins. The three-dimensional reconstruction of cultural heritage objects is one of the most important applications in this area and is usually accomplished by close-range photogrammetry. The problem is that the images are often noisy, and in practice the dense image matching method has significant limitations in reconstructing the geometric details of cultural heritage objects. Displaying high-level detail in three-dimensional models, especially of cultural heritage objects, is therefore a serious challenge in this field. In this paper, the shape from polarization method is investigated: a passive method that avoids the drawbacks of active methods. In this method, the resolution of the depth maps can be dramatically increased using the information obtained from polarized light by rotating a linear polarizing filter in front of a digital camera. From these polarized images, the surface details of the object can be reconstructed locally with high accuracy. The fusion of the polarization and photogrammetric methods is an appropriate solution for achieving high-resolution three-dimensional reconstruction. The surface reconstructions have been assessed both visually and quantitatively. The evaluations showed that the proposed method reconstructs significantly more surface detail in the three-dimensional model than the photogrammetric method, with 10 times higher depth resolution.
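    The intensity seen behind a rotating linear polarizer varies sinusoidally with the filter angle, which is the information shape from polarization exploits. As a minimal illustration (not the authors' implementation), the per-pixel mean intensity, degree of linear polarization, and angle of polarization can be recovered by least squares from three or more filter angles:

```python
import numpy as np

def fit_polarization(intensities, angles_rad):
    """Least-squares fit of I(a) = c0 + c1*cos(2a) + c2*sin(2a).

    Returns mean intensity, degree of linear polarization (DoLP),
    and angle of polarization (AoLP) from intensities captured at
    several polarizer angles."""
    A = np.stack([np.ones_like(angles_rad),
                  np.cos(2 * angles_rad),
                  np.sin(2 * angles_rad)], axis=1)   # (n_angles, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)
    c0, c1, c2 = coeffs
    dolp = np.sqrt(c1**2 + c2**2) / c0               # modulation depth / mean
    aolp = 0.5 * np.arctan2(c2, c1)                  # phase of the sinusoid
    return c0, dolp, aolp

# synthetic check: known DoLP 0.4 and AoLP 30 degrees
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
true_aolp = np.deg2rad(30.0)
I = 100.0 * (1 + 0.4 * np.cos(2 * angles - 2 * true_aolp))
i0, dolp, aolp = fit_polarization(I, angles)
```

    In a shape-from-polarization pipeline this fit is done per pixel, and the recovered DoLP/AoLP maps constrain the surface normal at each point.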

    AUTOMATIC EXTRACTION OF CONTROL POINTS FROM 3D LIDAR MOBILE MAPPING AND UAV IMAGERY FOR AERIAL TRIANGULATION

    Installing targets and measuring them as ground control points (GCPs) are time-consuming and cost-inefficient tasks in a UAV photogrammetry project. This research aims to automatically extract GCPs from 3D LiDAR mobile mapping system (L-MMS) measurements and UAV imagery to perform aerial triangulation in a UAV photogrammetric network. The L-MMS acquires 3D point clouds of an urban environment, including the ground and building facades, with an accuracy of a few centimetres. Integrating UAV imagery as complementary information reduces the acquisition time and increases the level of automation in a production line, yielding higher-quality measurements and more diverse products. This research hypothesises that the spatial accuracy of the L-MMS is higher than that of the UAV photogrammetric point clouds. Tie points are extracted from the UAV imagery with the well-known SIFT method and then matched. The structure from motion (SfM) algorithm is applied to estimate the 3D object coordinates of the matched tie points. Rigid registration is carried out between the point clouds obtained from the L-MMS and from SfM. For each tie point in the SfM point cloud, its neighbouring points are selected from the L-MMS point cloud, a plane is fitted to them, and the tie point is projected onto that plane; this is how the LiDAR-based control points (LCPs) are calculated. The re-projection errors of the analyses carried out on a test data set of the Glian area in Iran show an accuracy of half a pixel, corresponding to a range accuracy of a few centimetres. Finally, a significant speed-up of survey operations is achieved alongside the improved spatial accuracy of the extracted LCPs.
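    The projection step described above, fitting a plane to the LiDAR neighbours of an SfM tie point and projecting the tie point onto it, can be sketched with a generic SVD plane fit (illustrative, not the authors' code):

```python
import numpy as np

def project_to_local_plane(tie_point, neighbours):
    """Fit a plane to nearby LiDAR points via SVD and project the
    SfM tie point onto it, yielding a LiDAR-based control point."""
    neighbours = np.asarray(neighbours, dtype=float)
    centroid = neighbours.mean(axis=0)
    # smallest right-singular vector of the centred points = plane normal
    _, _, vt = np.linalg.svd(neighbours - centroid)
    normal = vt[-1]
    # signed distance of the tie point from the plane, removed along the normal
    offset = np.dot(np.asarray(tie_point, dtype=float) - centroid, normal)
    return np.asarray(tie_point, dtype=float) - offset * normal

# tie point floating 0.5 above the z = 0 plane sampled by four LiDAR points
plane_pts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]]
lcp = project_to_local_plane([0.3, 0.4, 0.5], plane_pts)
```

    The projected point inherits the (higher) planimetric and height accuracy of the LiDAR surface rather than that of the SfM reconstruction.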

    DEVELOPMENT OF A VOXEL BASED LOCAL PLANE FITTING FOR MULTI-SCALE REGISTRATION OF SEQUENTIAL MLS POINT CLOUDS

    The Mobile Laser Scanner (MLS) system is one of the most accurate and fastest data acquisition systems for mapping indoor and outdoor environments. To use this system in an indoor environment, where GNSS data cannot be captured, Simultaneous Localization and Mapping (SLAM) is employed. Most SLAM research has used probabilistic approaches to determine the sensor position and create a map, which leads to drift error in the final result due to their uncertainty. In addition, most SLAM methods give little weight to geodetic and mapping concepts. This research aims to solve the SLAM problem by applying adjustment concepts from mapping and the geometrical principles of the environment, and proposes an algorithm for reducing drift. For this purpose, a model-based registration is suggested: through voxelization, corresponding points fall into the same voxel, and the registration is done using a plane model. Two registration methods are proposed, a simple one and a pyramid one. The results show that the simple registration algorithm is more efficient than the pyramid one when the distance between sequential scans is small; otherwise, the pyramid registration is used. In the evaluation on simulated data, the pyramid and simple methods achieved 96.9% and 97.6% accuracy, respectively. The final test compares the proposed method with a SLAM method and the ICP algorithm, which are described further.
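    Grouping candidate correspondences by voxel, as described above, amounts to hashing each point into an integer grid cell; a minimal sketch (the voxel size and data layout are illustrative assumptions):

```python
from collections import defaultdict

def voxelize(points, voxel_size):
    """Map each 3D point to the integer index of its voxel so that
    points from two sequential scans falling into the same cell can
    be treated as candidate correspondences for plane-based
    registration."""
    grid = defaultdict(list)
    for p in points:
        # floor division keeps negative coordinates in the correct cell
        key = tuple(int(c // voxel_size) for c in p)
        grid[key].append(p)
    return grid

grid = voxelize([(0.1, 0.2, 0.3), (0.4, 0.1, 0.2), (1.2, 0.0, 0.0)], 0.5)
```

    Points sharing a key would then feed the local plane fit; a pyramid variant would repeat this with progressively smaller voxel sizes.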

    SURFACE NORMAL RECONSTRUCTION USING POLARIZATION-UNET

    Today, three-dimensional reconstruction of objects has many applications in various fields, so choosing a suitable method for high-resolution three-dimensional reconstruction is an important issue, and displaying high-level detail in three-dimensional models is a serious challenge in this field. Until now, active methods have been used for high-resolution three-dimensional reconstruction, but they require a light source close to the object. Shape from polarization (SfP) is one of the best solutions for high-resolution three-dimensional reconstruction of objects: a passive method that does not have the drawbacks of active methods. The changes in the polarization of the light reflected from an object can be analyzed using a polarization camera, or by placing a polarizing filter in front of a digital camera and rotating it. Using this information, the surface normals can be reconstructed with high accuracy, which leads to local reconstruction of the surface details. In this paper, an end-to-end deep learning approach is presented to produce the surface normals of objects. A benchmark dataset is used to train the neural network and evaluate the results. The results have been evaluated quantitatively and qualitatively against other methods and under different lighting conditions, using the mean angular error (MAE). The evaluations showed that the proposed method accurately reconstructs the surface normals of objects with the lowest MAE, 18.06 degrees over the whole dataset, compared to previous physics-based methods, which range between 41.44 and 49.03 degrees.
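    The MAE metric used for the evaluation is the per-pixel angle between the estimated and ground-truth normals, averaged over the image; a short numpy sketch of one plausible definition:

```python
import numpy as np

def mean_angular_error(n_est, n_gt):
    """Mean angle in degrees between corresponding unit normals,
    given arrays of shape (..., 3)."""
    n_est = n_est / np.linalg.norm(n_est, axis=-1, keepdims=True)
    n_gt = n_gt / np.linalg.norm(n_gt, axis=-1, keepdims=True)
    # clip guards against cosines drifting past [-1, 1] numerically
    cos = np.clip(np.sum(n_est * n_gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.mean(np.arccos(cos)))

# two pixels: identical normals, and normals 90 degrees apart -> MAE 45
est = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
gt = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
mae = mean_angular_error(est, gt)
```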

    Pareto optimality solution of the multi-objective photogrammetric resection-intersection problem

    Reconstruction of architectural structures from photographs has recently seen intensive effort in computer vision research. It is achieved through the solution of nonlinear least squares (NLS) problems to obtain accurate structure and motion estimates. In photogrammetry, NLS contributes to the determination of 3-dimensional (3D) terrain models from photographs. The traditional NLS approach to the resection-intersection problem based on an implicit formulation suffers, on the one hand, from the lack of any provision for weighting the involved variables. On the other hand, an explicit formulation expresses the objectives to be minimized in different forms, resulting in different values for the estimated parameters at non-zero residuals. These objectives may conflict in a Pareto sense, namely, a small change in the parameters increases one objective while decreasing the other, as is often the case in multi-objective problems. This is typical of error-in-all-variables (EIV) models, e.g., the resection-intersection problem, where such a change in the parameters can be caused by errors in both the image and the reference coordinates. This study proposes the Pareto optimal approach as a possible improvement to the solution of the resection-intersection problem. It provides simultaneous estimation of the coordinates and orientation parameters of the cameras in a two- or multi-station camera system on the basis of a properly weighted multi-objective function. This objective represents the weighted sum of the squares of the direct explicit differences between the measured and computed ground coordinates as well as the image coordinates.
    The effectiveness of the proposed method is demonstrated on two camera calibration problems, where the internal and external orientation parameters are estimated on the basis of the collinearity equations, employing the data of a Manhattan-type test field as well as the data of an outdoor, real-case experiment. In addition, an architectural reconstruction of the Merton College court in Oxford (UK) via estimation of camera matrices is also presented. Although these two problems are different, the first considering the error reduction of the image and spatial coordinates and the second the precision of the space coordinates, Pareto optimality can handle both in a general and flexible way.
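    The weighted-sum scalarization underlying this approach can be illustrated with two deliberately simple conflicting quadratic objectives standing in for the image- and ground-coordinate residual terms (a toy example, not the paper's actual objective function):

```python
import numpy as np

def weighted_sum_optimum(w, a=0.0, b=1.0):
    """Closed-form minimiser of f(x) = w*(x - a)**2 + (1 - w)*(x - b)**2.

    The two squared terms play the role of the two conflicting
    objectives; setting df/dx = 0 gives x = w*a + (1 - w)*b."""
    return w * a + (1 - w) * b

# sweeping the weight from 0 to 1 traces the Pareto front
# between the two single-objective optima a and b
front = [weighted_sum_optimum(w) for w in np.linspace(0.0, 1.0, 5)]
```

    Each weight yields one Pareto-optimal compromise; in the resection-intersection problem the weights would instead reflect the relative precision of the image and reference measurements.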

    AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close-range photogrammetry is widely used in many fields such as structural measurement, topographic surveying, and architectural and archaeological surveying. As a non-contact technique, photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both the 3D geometry (structure) and the camera pose (motion); it is commonly known as structure from motion (SfM). In this research, a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views; here, the efficient SIFT method is used for image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines will not remain parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multiple-view Euclidean reconstruction is applied and discussed. To refine the results and achieve precise 3D points, the more general and useful approach of bundle adjustment is used. Finally, two real cases, an excavation and a tower, are reconstructed.
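    The structure half of SfM, recovering a 3D point from its projections in two views, can be illustrated with standard linear (DLT) triangulation; the camera matrices below are synthetic, chosen only for the example:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: given 3x4 camera matrices P1, P2
    and normalised pixel observations x1, x2, intersect the two rays
    by solving A @ X = 0 for the homogeneous point X."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                  # null vector of A, up to scale
    return X[:3] / X[3]         # dehomogenise

# synthetic stereo pair: identity camera and one translated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
```

    Bundle adjustment then refines such linear estimates, together with the camera parameters, by minimising the reprojection error over all views.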

    THREE PRE-PROCESSING STEPS TO INCREASE THE QUALITY OF KINECT RANGE DATA

    With technology developing at its current rate and the increasing use of active sensors in close-range photogrammetry and computer vision, range images are the main new data type added to the existing collection. Although the main output of these sensors is the point cloud, range images can themselves be considered important pieces of information: being a bridge between 2D and 3D data gives them unique and important attributes. Three such properties are exploited in this study. The first attribute is the neighborhood of null pixels, which adds a new accuracy field to the point cloud. This field can be used later for data registration and integration: when there is a conflict between points from different stations, those with the lower accuracy value can be discarded. Next, polynomial fitting is applied to known planar regions. This step can smooth the final point cloud, but applies only to some applications; classification and region tracking across a series of images are needed for it to be applicable. Finally, there are break-lines created by errors in the data transfer software. A break-line is caused by the loss of some pixels during data transfer and storage, and the image shifts along it. This error usually occurs when the camera moves fast and the processor cannot handle the transfer process entirely. The proposed method is based on edge detection, where horizontal lines are used to recognize the break-line and near-vertical lines are used to determine the shift value.
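    The first attribute, scoring each depth pixel by how many null (invalid) neighbours it has, can be sketched as a simple 8-neighbourhood count; the null value and image layout here are illustrative assumptions:

```python
import numpy as np

def null_neighbour_count(depth, null_value=0):
    """Count the invalid 8-neighbours of every pixel of a range
    image; a higher count suggests a less reliable depth value."""
    mask = (depth == null_value).astype(int)
    padded = np.pad(mask, 1)            # zero border: outside counts as valid
    count = np.zeros_like(mask)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue                # skip the centre pixel itself
            count += padded[1 + dr:1 + dr + mask.shape[0],
                            1 + dc:1 + dc + mask.shape[1]]
    return count

depth = np.array([[5, 0, 5],
                  [5, 5, 5],
                  [5, 5, 5]])
count = null_neighbour_count(depth)
```

    Stored per point, this count becomes the accuracy field mentioned in the abstract, letting later registration steps prefer points with fewer invalid neighbours.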

    EFFECT OF DIGITAL FRINGE PROJECTION PARAMETERS ON 3D RECONSTRUCTION ACCURACY

    3D reconstruction has long been one of the most interesting research areas for photogrammetry and computer vision researchers. This study aims to evaluate the digital fringe projection method for the reconstruction of small objects with complicated shapes. Digital fringe projection is a novel structured light technique that integrates interferometric and triangulation methods. A digital projector projects a series of sinusoidal fringe patterns onto the object surface; then a camera captures, from a different point of view, images of the patterns deformed by the object's surface topography. The captured images are processed and the depth-related phase is calculated. Because an arctangent function is used in the phase extraction, the computed phase ranges from −pi to +pi, so a phase unwrapping step is necessary. Finally, the unwrapped phase map is converted to a depth map with a mathematical model. This method has many advantages: high speed, high accuracy, low-cost hardware, high resolution (each pixel ends up with a depth value), and simple computations. This paper evaluates the different parameters that affect the accuracy of the final results. For this purpose, several tests were designed and implemented, assessing the effect of the number of phase shifts, the spatial frequency of the fringe pattern, the lighting conditions, the noise level of the images, and the color and material of the target objects on the quality of the resulting phase map. The evaluation results demonstrate that the digital fringe projection method is capable of obtaining depth maps of complicated objects with high accuracy. The contrast test showed that the method works under different ambient lighting conditions, although it does not work properly in very bright light. The results on objects with various materials, colors, and shapes demonstrate the high capability of this method for 3D reconstruction.
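    The arctangent step mentioned above is the standard N-step phase-shifting formula; a minimal numpy sketch on synthetic fringe intensities (an illustration of the general technique, not the paper's code):

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: images[n] = A + B*cos(phi + 2*pi*n/N).

    Returns the wrapped phase phi in (-pi, pi]; a separate phase
    unwrapping step removes the 2*pi ambiguities."""
    N = len(images)
    shifts = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(images, shifts))
    den = sum(I * np.cos(d) for I, d in zip(images, shifts))
    # the sums reduce to -(N/2)*B*sin(phi) and (N/2)*B*cos(phi)
    return -np.arctan2(num, den)

# synthetic 4-step fringe intensities with known phase 0.7 rad
phi = 0.7
imgs = [128 + 100 * np.cos(phi + 2 * np.pi * n / 4) for n in range(4)]
phase = wrapped_phase(imgs)
```

    Applied per pixel, this yields the wrapped phase map whose quality the paper's tests (phase-shift count, fringe frequency, lighting, noise, surface material) assess.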

    REACH SCALE APPLICATION OF UAV+SFM METHOD IN SHALLOW RIVERS HYPERSPATIAL BATHYMETRY

    Nowadays, rivers are impacted by various human activities and are highly regulated. To rehabilitate these systems, spatial and process-based analyses of rivers are essential. Hydrodynamic models are sophisticated tools in this regard, and instream topography is one of the most important inputs of these models. To represent hyperspatial topography and bathymetry in shallow rivers, UAV imagery with structure from motion may be an optimal method, considering the extent of the application, the vegetation conditions, and the water clarity. However, at present no workflow is available for applying the UAV+SfM method in riverine environments at the reach scale or above. In this study, therefore, a new workflow is presented and evaluated on the Alarm River. The evaluation showed that the workflow supports a UAV speed of 2 m/s while mapping flight lines under low illumination changes. The specific image acquisition pattern in the proposed workflow substantially decreases processing time, and precise control of flight height and image overlap leads to consistently accurate results. Validation against rtkGNSS points showed that the suggested workflow can provide 0.01 m-resolution topographic data with an error of less than 0.075 m at the 95% confidence level in clear shallow rivers.
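    The link between UAV speed, image footprint, forward overlap, and trigger interval follows from a simple relation; the numbers below are illustrative, not the study's actual flight parameters:

```python
def max_uav_speed(footprint_along_track_m, forward_overlap, trigger_interval_s):
    """Maximum ground speed that still achieves the requested forward
    overlap: consecutive images may advance by at most
    (1 - overlap) * footprint between triggers."""
    advance_per_image = (1.0 - forward_overlap) * footprint_along_track_m
    return advance_per_image / trigger_interval_s

# e.g. a 40 m along-track footprint, 80 % overlap, one image every 4 s
v = max_uav_speed(40.0, 0.80, 4.0)
```

    Flying slower than this bound, or shortening the trigger interval, is what makes the overlap, and hence the SfM reconstruction quality, consistent along a reach.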